I spent the day using DeepSeek... here are the shocking things I learned about China's AI bot
DeepSeek, the blockbuster AI chatbot from Communist China, caused a panic when it launched Monday, triggering the US stock market to hemorrhage $1 trillion. I spent the day asking the chatbot questions, hoping to get an idea of the hype, and while some of its answers were correct, such as that 95 percent of global internet traffic flows through undersea cables, others echoed the talking points of the communist nation. 'China has developed advanced submarines and underwater drones capable of tapping into these cables to intercept communications,' DeepSeek told me. I also watched in real time as it removed answers or flat-out refused to talk about Tiananmen Square, internment camps and protests in Hong Kong. The chatbot divulged details about how China employs hacking groups to steal Americans' data and gain access to our sensitive systems.
- Asia > China > Hong Kong (0.26)
- Asia > China > Tibet Autonomous Region (0.18)
- Asia > Taiwan (0.06)
- North America > United States > New York > New York County > New York City (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Asia Government > China Government (0.73)
We tried out DeepSeek. It works well, until we asked it about Tiananmen Square and Taiwan
The launch of a new chatbot by Chinese artificial intelligence firm DeepSeek triggered a plunge in US tech stocks as it appeared to perform as well as OpenAI's ChatGPT and other AI models, but using fewer resources. By Monday, DeepSeek's AI assistant had rapidly overtaken ChatGPT as the most popular free app in Apple's US and UK app stores. Despite its popularity with international users, the app appears to censor answers to sensitive questions about China and its government. Chinese generative AI must not contain content that violates the country's "core socialist values", according to a technical document published by the national cybersecurity standards committee. That includes content that "incites to subvert state power and overthrow the socialist system", or "endangers national security and interests and damages the national image".
- Asia > Taiwan (0.46)
- North America > United States (0.16)
- Asia > China > Tibet Autonomous Region (0.06)
- (7 more...)
- Government > Military (0.56)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.53)
- Government > Regional Government (0.52)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.56)
After anti-'woke' backlash, Google's Gemini faces heat over China taboos
Taipei, Taiwan – As Google finds itself embroiled in an anti-"woke" backlash over AI model Gemini's reluctance to depict white people, the tech giant is facing further criticism over the chatbot's handling of sensitive topics in China. Gemini users reported this week that the update to Google Bard failed to generate representative images when asked to produce depictions of events such as the 1989 Tiananmen Square massacre and the 2019 pro-democracy protests in Hong Kong. On Thursday, X user Yacine, a former software engineer at Stripe, posted a screenshot of Gemini telling a user it could not generate "an image of a man in 1989 Tiananmen Square" – a prompt alluding to the iconic image of a protester blocking the path of a Chinese tank – due to its "safety policy". Stephen L Miller, a conservative commentator in the US, also shared a screenshot on X purporting to show Gemini saying it was unable to generate a "portrait of what happened at Tiananmen Square" due to the "sensitive and complex" historical nature of the event. "It is important to approach this topic with respect and accuracy, and I am not able to ensure that an image generated by me would adequately capture the nuance and gravity of the situation," Gemini said, according to a screenshot shared by Miller.
- Asia > China > Hong Kong (0.27)
- Asia > Taiwan > Taiwan Province > Taipei (0.25)
- North America > United States > California (0.06)
- Asia > China > Beijing > Beijing (0.05)
There's no Tiananmen Square in the new Chinese image-making AI
When a demo of the software was released in late August, users quickly found that certain words--both explicit mentions of political leaders' names and words that are potentially controversial only in political contexts--were labeled as "sensitive" and blocked from generating any result. China's sophisticated system of online censorship, it seems, has extended to the latest trend in AI. It's not rare for similar AIs to restrict users from generating certain types of content. DALL-E 2 prohibits sexual content, faces of public figures, and medical treatment images. The ERNIE-ViLG model is part of Wenxin, a large-scale natural-language processing project from China's leading AI company, Baidu.
A New Tool Shows How Google Results Vary Around the World
Google's claim to "organize the world's information and make it universally accessible and useful" has earned it an aura of objectivity. Its dominance in search, and the disappearance of most competitors, make its lists of links appear still more canonical. An experimental new interface for Google Search aims to remove that mantle of neutrality. Search Atlas makes it easy to see how Google offers different responses to the same query on versions of its search engine offered in different parts of the world. The research project reveals how Google's service can reflect or amplify cultural differences or government preferences--such as whether Beijing's Tiananmen Square should be seen first as a sunny tourist attraction or the site of a lethal military crackdown on protesters.